Patent Abstract:
The present invention relates to a memory access method for an embedded system in which a non-cacheable attribute is selectively set in a secondary cache table only for regions that can be accessed simultaneously by the CPU and the DMA, so that memory sharing and cache use can coexist, minimizing memory loss and improving system performance. The invention comprises the steps of: determining, within the memory area managed by a primary cache table in 1 MB units, the memory regions for which caching or access rights must be distinguished; dividing the determined memory regions into predetermined units; creating a secondary cache table that manages memory in 4 KB units for the regions divided into the predetermined units; determining the DMA-accessible regions among the memory regions for which the secondary cache table is created; and setting the non-cacheable attribute in the secondary cache table only for the DMA-accessible regions.
Publication number: KR19990033115A
Application number: KR1019970054372
Filing date: 1997-10-23
Publication date: 1999-05-15
Inventor: 이강원
Applicant: 구자홍; 엘지전자 주식회사
IPC main classification:
Patent Description:

How to Access Memory in an Embedded System
The present invention relates to an embedded system, and more particularly, to a memory access method of an embedded system.
Most recently announced RISC-type CPUs include a memory management unit (MMU) to support virtual memory and memory caching.
However, in an embedded system, devices other than the CPU (DMA controllers, ECP controllers, and the like) must also be able to access memory, so a conventional memory management unit (MMU) that supports caching only for the CPU cannot be used as is.
In addition, even though memory caching can be expected to improve performance by 30 to 40%, it is usually not used, and embedded systems equipped with commercially available real-time operating systems (OS) generally do not support the memory management unit (MMU).
In a system with general-purpose DMA, the DMA accesses particular memory regions independently of the CPU.
As a result, the contents of main memory that have already been read into the CPU's cache may be updated by the DMA.
That is, as shown in the exemplary diagram of FIG. 1, the value stored in main memory (X = 100) may be updated by the DMA to 'X = 50' while the CPU's cache still holds the old value.
This problem is typically solved with hardware such as a snoopy cache controller or a memory monitor.
However, in an embedded system where hardware cost must be kept low, such hardware is not a suitable solution.
In order to solve this problem in software, the memory areas that can be accessed by both the DMA and the CPU must be made non-cacheable. To allow the memory cache and write buffer to be used with the existing ARM7 CPU, the ARM710a CPU, which incorporates a memory management unit (MMU), was developed.
A common way to use the memory management unit (MMU) in the ARM710a CPU is to divide the 32-bit address space (4 GB) into 1 MB units and configure, for each unit, whether caching and the write buffer are used and what the access rights are.
That is, the ARM710a CPU uses a primary cache table, as shown in FIG. 2, that manages memory in units of 1 MB, and a secondary cache table, as shown in FIG. 3, that manages memory in units of 4 KB, to set whether the memory is cached and whether the write buffer is used.
Since the primary cache table is 16 KB in size and the secondary cache table requires 1 KB per 1 MB of memory, creating secondary cache tables for the entire address space requires 4 MB of memory.
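These figures follow from the entry counts and the 4-byte entry size; as a purely illustrative sketch (in C, not part of the original disclosure), the arithmetic can be written as:

/* Table-size arithmetic for the two table levels (assumes 4-byte descriptors,
 * as in the ARM710a MMU). */
#define SECTION_SIZE       (1UL << 20)                       /* 1 MB covered per primary entry   */
#define SMALL_PAGE_SIZE    (4UL << 10)                       /* 4 KB covered per secondary entry */
#define L1_ENTRIES         (0x100000000ULL / SECTION_SIZE)   /* 4096 entries to span 4 GB        */
#define L1_TABLE_BYTES     (L1_ENTRIES * 4)                  /* 16 KB primary cache table        */
#define L2_ENTRIES_PER_MB  (SECTION_SIZE / SMALL_PAGE_SIZE)  /* 256 entries per 1 MB             */
#define L2_TABLE_BYTES     (L2_ENTRIES_PER_MB * 4)           /* 1 KB of secondary table per 1 MB */
#define L2_FULL_COVERAGE   (L1_ENTRIES * L2_TABLE_BYTES)     /* 4096 x 1 KB = 4 MB for all 4 GB  */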
Therefore, when using the cache in the ARM710a CPU, it is common to use only the primary cache table shown in FIG. 2.
In general, however, an embedded system is usually limited to about 1 to 2 MB of total memory, and in this case there is the problem that the cache cannot be used at all when DMA is used.
Therefore, in order to overcome the conventional problem, an object of the present invention is to provide a memory access method for an embedded system that selectively sets a non-cacheable attribute in a secondary cache table only for regions that can be accessed by the CPU and the DMA simultaneously, thereby allowing memory sharing while still using the cache, minimizing memory loss and improving system performance.
FIG. 1 is an exemplary view showing a conventional memory value update.
FIG. 2 is an exemplary view showing the configuration of a general primary cache table.
FIG. 3 is an exemplary view showing the configuration of a general secondary cache table.
FIG. 4 is an exemplary view showing the configuration of the cache tables in the present invention.
FIG. 5 is an exemplary view showing the format of the primary cache table in the present invention.
FIG. 6 is an exemplary view showing the format of the secondary cache table in the present invention.
Explanation of symbols on the main parts of the drawings
201: primary cache table 202: secondary cache table
In order to achieve the above object, the present invention provides a method of accessing memory by creating a primary cache table for memory management in units of 1 MB and a secondary cache table for memory management in units of 4 KB, characterized by performing the steps of: determining, within the memory area covered by the primary cache table, the memory regions for which caching or access rights need to be distinguished; dividing the determined memory regions into predetermined units; creating a secondary cache table for the memory regions divided into the predetermined units; determining the DMA-accessible regions among the memory regions for which the secondary cache table is created; and setting the non-cacheable attribute in the secondary cache table only for the DMA-accessible regions.
A first feature of the present invention is to perform memory management purely in software without using hardware.
A second feature of the present invention is to minimize the waste of memory by maintaining secondary cache tables only for the memory that is actually used.
A third feature of the present invention is to allow memory sharing between the DMA and the CPU while disabling the cache for the shared area, thereby reducing the complexity of and coupling between modules.
A fourth feature of the present invention is to operate independently of the real-time operating system (OS) by removing the parts that could affect it.
Hereinafter, the present invention will be described in detail with reference to the drawings.
FIG. 4 illustrates the configuration of the cache tables according to an embodiment of the present invention. As shown in FIG. 4, for the memory regions partitioned in units of 1 MB, either the address of a secondary cache table or an actual memory address is stored in the lists 201-1 to 201-4 of the primary cache table 201. Among these, the region of list 201-1, for which the cache or access rights need to be distinguished, is further divided into units of 4 KB, and the actual memory addresses of the divided regions are recorded in the secondary cache table 202. In the secondary cache table 202, caching is disabled for the lists 202-3 and 202-5, which correspond to the regions for which a cache or access restriction is required, and caching is enabled for the lists 202-1, 202-2, 202-4, and 202-6, which correspond to the regions for which no such restriction is required, and the regions are accessed accordingly.
The operation and effects of the embodiment of the present invention configured as described above are as follows.
First, the present invention divides the memory into units of 1 MB and records, for each partitioned region, either the address of a secondary cache table or the actual memory address in the lists 201-1 to 201-4 of the primary cache table 201. In list 201-1, the address of the secondary cache table 202 is recorded in order to access a region for which cache or access rights need to be distinguished, and in the lists 201-2 to 201-4, the actual memory addresses are recorded in order to access regions for which no such distinction is needed.
Thereafter, the region for which the cache or access rights are distinguished is divided into 4 KB units, and the actual memory addresses of the divided regions are recorded in the secondary cache table 202.
If the portion of the memory area that needs to be managed separately is 1 MB, only 1 KB of additional memory, corresponding to the size of one secondary cache table 202, is used.
In this case, in the lists 202-1 to 202-6 of the secondary cache table 202, cacheable or non-cacheable is set according to whether the DMA can access the corresponding region: the lists 202-1, 202-2, 202-4, and 202-6 record the real addresses of regions that the DMA cannot access and mark them cacheable, while the lists 202-3 and 202-5 record the real addresses of regions that the DMA can access and mark them non-cacheable.
As a result, the memory that is not accessible by the DMA can be accessed by using a cache and a write buffer.
That is, as shown in the exemplary diagram of FIG. 4, the present invention divides the memory region and maintains a secondary cache table 202 only for the regions where the cache or access rights need to be distinguished; for the remaining regions, the real memory is mapped directly in the primary cache table 201.
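A minimal sketch of this selection rule is shown below (in C, for illustration only; dma_can_access() is a hypothetical board-specific predicate, and the small-page descriptor values 0x0e for cacheable/bufferable and 0x02 for non-cacheable/non-bufferable follow the example given next).

#include <stdbool.h>
#include <stdint.h>

#define L2_ENTRIES 256                                /* 256 x 4 KB = 1 MB */

/* Hypothetical board-specific test: can the DMA controller reach this address? */
extern bool dma_can_access(uint32_t phys_addr);

/* Fill one secondary cache table: 4 KB pages reachable by the DMA are marked
 * non-cacheable, all other pages are marked cacheable and bufferable. */
void fill_secondary_table(volatile uint32_t *l2, uint32_t phys_base)
{
    for (uint32_t i = 0; i < L2_ENTRIES; i++) {
        uint32_t page = phys_base + i * 0x1000u;
        l2[i] = page | (dma_can_access(page) ? 0x002u : 0x00eu);
    }
}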
For example, if the DRAM occupies 1 MB from address '0x00000000' to '0x00100000' and only the first 512 KB is to be cacheable, the tables can be set as follows.
First, the start addresses of the primary cache table 201 and the secondary cache table 202 are set as follows.
Starting address of the primary cache table: 0x60000000 (ROM)
Starting address of secondary cache table: 0x60004000 (just after the primary cache table)
Thereafter, the cacheability of the entire address space is set in the primary cache table 201, which has the format shown in FIG. 5; an example is as follows.
0x60000000: 0x60004011 ; refers to the secondary cache table (first 1 MB)
0x60000004: 0x0010001e ; non-referenceable area (no physical memory)
0x60000008: 0x0020001e ; non-referenceable area (no physical memory)
Finally, the cacheability of the address space requiring fine-grained partitioning is set in the secondary cache table 202, which has the format shown in FIG. 6.
0x60004000: 0x0000000e ; cacheable, write buffer enabled (first 4 KB)
0x60004004: 0x00001002 ; non-cacheable, write buffer disabled (next 4 KB)
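The following sketch (again in C, not part of the original disclosure) programs this example: the primary cache table at 0x60000000, the secondary cache table at 0x60004000, and the first 512 KB of the 1 MB DRAM cacheable; the descriptor values are taken from the entries listed above.

#include <stdint.h>

#define L1_TABLE ((volatile uint32_t *)0x60000000u)   /* primary cache table   */
#define L2_TABLE ((volatile uint32_t *)0x60004000u)   /* secondary cache table */

void setup_example_tables(void)
{
    /* Primary table: entry 0 refers to the secondary cache table (descriptor
     * 0x60004011 above); the remaining 4095 entries are 1 MB section
     * descriptors for areas with no physical memory (0x0010001e pattern). */
    L1_TABLE[0] = 0x60004011u;
    for (uint32_t i = 1; i < 4096u; i++)
        L1_TABLE[i] = (i << 20) | 0x01eu;

    /* Secondary table: 256 small-page entries for the first 1 MB of DRAM.
     * The first 128 pages (512 KB) are cacheable and bufferable (0x0e),
     * the rest non-cacheable and non-bufferable (0x02). */
    for (uint32_t i = 0; i < 256u; i++)
        L2_TABLE[i] = (i * 0x1000u) | ((i < 128u) ? 0x00eu : 0x002u);
}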
Therefore, if the memory management unit (MMU) is enabled during system initialization after the primary and secondary cache tables 201 and 202 have been set as described above, the cache can be used selectively without any separate operation by the real-time operating system (OS).
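As an illustration of that initialization step, a sketch is given below; it assumes the standard CP15 programming model of the ARM710a (translation table base in register 2, domain access control in register 3, and the M, C, and W enable bits in control register 1), uses GCC-style inline assembly, and is not part of the original disclosure.

/* Enable the MMU, cache, and write buffer once the tables are in place.
 * ttb is the start address of the primary cache table (0x60000000 above). */
static inline void enable_mmu(unsigned long ttb)
{
    unsigned long ctrl;
    __asm__ volatile ("mcr p15, 0, %0, c2, c0, 0" :: "r"(ttb));    /* translation table base  */
    __asm__ volatile ("mcr p15, 0, %0, c3, c0, 0" :: "r"(0x1UL));  /* domain 0: client access */
    /* Read-modify-write the control register; on parts where register 1 is
     * write-only, keep a software copy of its value instead of reading it. */
    __asm__ volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r"(ctrl));
    ctrl |= (1UL << 0) | (1UL << 2) | (1UL << 3);                  /* M, C, W enable bits     */
    __asm__ volatile ("mcr p15, 0, %0, c1, c0, 0" :: "r"(ctrl));
}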
As described in detail above, the present invention can perform memory caching selectively, thereby eliminating the cost of additional hardware in the embedded system, minimizing the loss of memory, and improving system performance.
In addition, if the present invention is applied to the initialization routine of the system, the cache and the write buffer can be used regardless of the type of real-time operating system (OS), making system development more flexible.
Claims:
Claims (1)
[1" claim-type="Currently amended] A method of accessing memory by creating a primary cache table for memory management in units of 1 MB and a secondary cache table for memory management in units of 4 KB, in which a cache or an access right needs to be distinguished from a memory area in which the primary cache table is created. A first step of determining a memory area in which there is a memory; a second step of creating a secondary cache table with respect to the memory area divided into predetermined units; and accessing a DM in the memory area in which the secondary cache table is created. Performing a third step of determining an available area and a fourth step of setting non-cacheable in the secondary cache table only for the accessible area of the DM.
Patent family:
Publication number | Publication date
KR100455116B1 | 2004-12-30
Legal status:
1997-10-23|Application filed by 구자홍, 엘지전자 주식회사
1997-10-23|Priority to KR10-1997-0054372A
1999-05-15|Publication of KR19990033115A
2004-12-30|Application granted
2004-12-30|Publication of KR100455116B1
Priority:
Application number | Filing date | Patent title
KR10-1997-0054372A|KR100455116B1|1997-10-23|1997-10-23|How to Access Memory in an Embedded System|